# Cosine Attention Mechanism
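Swin Transformer v2, the architecture behind all of the checkpoints listed below, replaces the scaled dot-product attention of the original Swin Transformer with scaled cosine attention: the similarity of each query-key pair is their cosine similarity divided by a learnable per-head temperature τ (clamped so τ ≥ 0.01), which keeps attention logits in a bounded range and stabilizes training at large model sizes. Below is a minimal PyTorch sketch of that computation; the function and parameter names are illustrative and are not the Transformers library's internal API.

```python
import math
import torch
import torch.nn.functional as F

def scaled_cosine_attention(q, k, v, logit_scale, rel_pos_bias=None):
    """Scaled cosine attention (Swin Transformer v2 style).

    q, k, v:      (batch, heads, tokens, head_dim)
    logit_scale:  learnable log-temperature, shape (heads, 1, 1)
    rel_pos_bias: optional (heads, tokens, tokens) relative position bias
    """
    # Cosine similarity is the dot product of L2-normalized vectors.
    attn = F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)
    # Divide by a learnable temperature tau; the clamp keeps tau >= 0.01.
    attn = attn * torch.clamp(logit_scale, max=math.log(1.0 / 0.01)).exp()
    if rel_pos_bias is not None:
        attn = attn + rel_pos_bias
    return torch.softmax(attn, dim=-1) @ v

# Toy usage: 2 heads of dimension 16 over a 64-token window.
q, k, v = (torch.randn(1, 2, 64, 16) for _ in range(3))
logit_scale = torch.nn.Parameter(torch.log(10 * torch.ones(2, 1, 1)))
out = scaled_cosine_attention(q, k, v, logit_scale)
print(out.shape)  # torch.Size([1, 2, 64, 16])
```

Because q and k are normalized before the product, each logit is bounded by the temperature, which is what prevents the attention-magnitude blow-ups seen when scaling the original architecture.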
The checkpoints below are Swin Transformer v2 image classification models published by microsoft, all Apache-2.0 licensed and served through the Transformers library.

| Model | Downloads | Likes | Notes |
|---|---|---|---|
| Swinv2 Large Patch4 Window12to24 192to384 22kto1k Ft | 3,048 | 4 | Pre-trained on ImageNet-21k and fine-tuned on ImageNet-1k at 384x384 resolution; hierarchical feature maps with local window self-attention. |
| Swinv2 Large Patch4 Window12to16 192to256 22kto1k Ft | 812 | 4 | Hierarchical feature maps with local window self-attention for efficient classification and dense recognition. |
| Swinv2 Base Patch4 Window16 256 | 1,853 | 3 | Hierarchical feature maps with local window self-attention for efficient classification and dense recognition. |
| Swinv2 Base Patch4 Window8 256 | 16.61k | 7 | Hierarchical feature maps with local window self-attention for efficient classification and dense recognition. |
| Swinv2 Tiny Patch4 Window8 256 | 25.04k | 10 | Pre-trained on ImageNet-1k; local window self-attention with linear computational complexity. |
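Every entry above loads the same way through Transformers. A minimal inference sketch for the most-downloaded checkpoint, assuming its hub id is microsoft/swinv2-tiny-patch4-window8-256 (inferred from the display name "Swinv2 Tiny Patch4 Window8 256"):

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Swinv2ForImageClassification

# Any test image works; this COCO sample is common in library examples.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

ckpt = "microsoft/swinv2-tiny-patch4-window8-256"
processor = AutoImageProcessor.from_pretrained(ckpt)
model = Swinv2ForImageClassification.from_pretrained(ckpt)

# Resizes and normalizes to the 256x256 input the checkpoint expects.
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# ImageNet-1k fine-tuned checkpoints predict one of 1,000 classes.
print(model.config.id2label[logits.argmax(-1).item()])
```

Swapping in any other model from the table only requires changing the checkpoint string; the processor picks up the matching input resolution from the checkpoint's configuration.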